Listeners normalize speech for contextual speech rate even without an explicit recognition task
Authors
Abstract
Similar resources
Contextual variability during speech-in-speech recognition.
This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either "pure" background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these...
Speech segmentation without speech recognition
In this paper, we present a semantic speech segmentation approach, in particular sentence segmentation, without speech recognition. In order to obtain phoneme-level information without word recognition, a novel vowel/consonant/pause (V/C/P) classification is proposed. An adaptive pause detection method is also presented to adapt to various backgrounds and environments. Three feature se...
The effects of speech rate, prosodic features, and blurred speech on Iranian EFL learners' listening comprehension
Keywords in English: effect of speech rate on listening comprehension, blurred speech, segmental and suprasegmental features, authentic speech, intelligibility, discrimination, omission, assimilation. Abstract: The rate of listening material in connected speech has generally been a nightmare for second-language learners, and for Iranian listeners in particular. Despite the common-sense expectation that slower speech facilitates listening comprehension...
Contextual Prediction Models for Speech Recognition
We introduce an approach to biasing language models towards known contexts without requiring separate language models or explicit contextually-dependent conditioning contexts. We do so by presenting an alternative ASR objective, where we predict the acoustics and words given the contextual cue, such as the geographic location of the speaker. A simple factoring of the model results in an additio...
An explicit independence constraint for factorised adaptation in speech recognition
Speech signals are usually affected by multiple acoustic factors, such as speaker characteristics and environment differences. Usually, the combined effect of these factors is modelled by a single transform. Acoustic factorisation splits the transform into several factor transforms, each modelling only one factor. This allows, for example, estimating a speaker transform in a noise condition and...
Journal
Journal title: The Journal of the Acoustical Society of America
Year: 2019
ISSN: 0001-4966
DOI: 10.1121/1.5116004